- Asia > Myanmar > Tanintharyi Region > Dawei (0.05)
- Asia > China > Hong Kong (0.04)
- North America > Puerto Rico > Peñuelas > Peñuelas (0.04)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Human Computer Interaction (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
Probing the Gaps in ChatGPT Live Video Chat for Real-World Assistance for People who are Blind or Visually Impaired
Chang, Ruei-Che; Natalie, Rosiana; Xu, Wenqian; Yap, Jovan Zheng Feng; Guo, Anhong
Recent advancements in large multimodal models have provided blind or visually impaired (BVI) individuals with new capabilities to interpret and engage with the real world through interactive systems that utilize live video feeds. However, the potential benefits and challenges of such capabilities in supporting diverse real-world assistive tasks remain unclear. In this paper, we present findings from an exploratory study with eight BVI participants. Participants used ChatGPT's Advanced Voice with Video, a state-of-the-art live video AI released in late 2024, in various real-world scenarios, from locating objects to recognizing visual landmarks, across unfamiliar indoor and outdoor environments. Our findings indicate that current live video AI effectively provides guidance and answers for static visual scenes but falls short in delivering the live descriptions essential in dynamic situations. Despite inaccuracies in spatial and distance information, participants leveraged the provided visual information to supplement their mobility strategies. Although the system was perceived as human-like due to high-quality voice interactions, assumptions about users' visual abilities, hallucinations, generic responses, and a tendency towards sycophancy led to confusion, distrust, and potential risks for BVI users. Based on the results, we discuss implications for assistive video AI agents, including incorporating additional sensing capabilities for real-world use, determining appropriate intervention timing beyond turn-taking interactions, and addressing ecological and safety concerns.
- North America > United States > New York > New York County > New York City (0.16)
- North America > United States > Colorado > Denver County > Denver (0.15)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- (17 more...)
- Research Report > New Finding (1.00)
- Personal > Interview (0.93)
Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
Fichtel, Leandra; Spliethöver, Maximilian; Hüllermeier, Eyke; Jimenez, Patricia; Klowait, Nils; Kopp, Stefan; Ngomo, Axel-Cyrille Ngonga; Robrecht, Amelie; Scharlau, Ingrid; Terfloth, Lutz; Vollmer, Anna-Lisa; Wachsmuth, Henning
The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research has focused on co-constructive explanation dialogues, where an explainer continuously monitors the explainee's understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which involves the LLM being instructed to explain a topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLMs' co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.
- North America > Mexico > Mexico City > Mexico City (0.04)
- Europe > Italy > Tuscany > Florence (0.04)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- (16 more...)
- Research Report > New Finding (1.00)
- Questionnaire & Opinion Survey (1.00)
MixAssist: An Audio-Language Dataset for Co-Creative AI Assistance in Music Mixing
Clemens, Michael; Marasović, Ana
While AI presents significant potential for enhancing music mixing and mastering workflows, current research predominantly emphasizes end-to-end automation or generation, often overlooking the collaborative and instructional dimensions vital for co-creative processes. This gap leaves artists, particularly amateurs seeking to develop expertise, underserved. To bridge this, we introduce MixAssist, a novel audio-language dataset capturing the situated, multi-turn dialogue between expert and amateur music producers during collaborative mixing sessions. Comprising 431 audio-grounded conversational turns derived from 7 in-depth sessions involving 12 producers, MixAssist provides a unique resource for training and evaluating audio-language models that can comprehend and respond to the complexities of real-world music production dialogues. Our evaluations, including automated LLM-as-a-judge assessments and human expert comparisons, demonstrate that fine-tuning models such as Qwen-Audio on MixAssist can yield promising results, with Qwen significantly outperforming other tested models in generating helpful, contextually relevant mixing advice. By focusing on co-creative instruction grounded in audio context, MixAssist enables the development of intelligent AI assistants designed to support and augment the creative process in music mixing.
- North America > United States > Utah (0.04)
- North America > United States > Virginia (0.04)
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.04)
- (6 more...)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.92)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Education (1.00)
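The MixAssist abstract mentions automated LLM-as-a-judge assessments alongside human expert comparisons. A minimal sketch of that evaluation pattern, with the judge stubbed out (a real pipeline would call an LLM); the prompts and answers below are illustrative, not taken from the dataset:

```python
# Sketch of pairwise LLM-as-a-judge evaluation: for each prompt, a judge
# picks the better of two model answers, and win rates are aggregated.
from collections import Counter

def judge(prompt, answer_a, answer_b):
    """Stub judge: prefers the longer, more specific mixing advice.
    A real judge would be an LLM prompted with both answers."""
    return "A" if len(answer_a) > len(answer_b) else "B"

def win_rate(pairs):
    """pairs: list of (prompt, answer_a, answer_b). Returns A's win rate."""
    votes = Counter(judge(*p) for p in pairs)
    return votes["A"] / len(pairs)

pairs = [
    ("How do I fix muddy low end?",
     "Cut 200-400 Hz on the guitars and high-pass the pads at 100 Hz.",
     "Use EQ."),
    ("The vocal sits behind the mix?",
     "Add 2-3 dB around 3 kHz and ride the fader in the chorus.",
     "Turn it up."),
]
print(win_rate(pairs))
```

The same aggregation works unchanged when the stub is swapped for a real judge model.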
Text Production and Comprehension by Human and Artificial Intelligence: Interdisciplinary Workshop Report
This report synthesizes the outcomes of a recent interdisciplinary workshop that brought together leading experts in cognitive psychology, language learning, and artificial intelligence (AI)-based natural language processing (NLP). The workshop, funded by the National Science Foundation, aimed to address a critical knowledge gap in our understanding of the relationship between AI language models and human cognitive processes in text comprehension and composition. Through collaborative dialogue across cognitive, linguistic, and technological perspectives, workshop participants examined the underlying processes involved when humans produce and comprehend text, and how AI can both inform our understanding of these processes and augment human capabilities. The workshop revealed emerging patterns in the relationship between large language models (LLMs) and human cognition, with highlights on both the capabilities of LLMs and their limitations in fully replicating human-like language understanding and generation. Key findings include the potential of LLMs to offer insights into human language processing, the increasing alignment between LLM behavior and human language processing when models are fine-tuned with human feedback, and the opportunities and challenges presented by human-AI collaboration in language tasks. By synthesizing these findings, this report aims to guide future research, development, and implementation of LLMs in cognitive psychology, linguistics, and education. It emphasizes the importance of ethical considerations and responsible use of AI technologies while striving to enhance human capabilities in text comprehension and production through effective human-AI collaboration.
- North America > United States > Iowa (0.04)
- North America > United States > Illinois > Champaign County > Urbana (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- (7 more...)
- Research Report > New Finding (0.93)
- Instructional Material > Course Syllabus & Notes (0.90)
- Education (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.68)
- Information Technology > Security & Privacy (0.46)
Retrieval-Augmented Generation of Ontologies from Relational Databases
Nayyeri, Mojtaba; Yogi, Athish A; Fathallah, Nadeen; Thapa, Ratan Bahadur; Tautenhahn, Hans-Michael; Schnurpel, Anton; Staab, Steffen
Transforming relational databases into knowledge graphs with enriched ontologies enhances semantic interoperability and unlocks advanced graph-based learning and reasoning over data. However, previous approaches either demand significant manual effort to derive an ontology from a database schema or produce only a basic ontology. We present RIGOR--Retrieval-augmented Iterative Generation of RDB Ontologies--an LLM-driven approach that turns relational schemas into rich OWL ontologies with minimal human effort. RIGOR combines three sources via RAG--the database schema and its documentation, a repository of domain ontologies, and a growing core ontology--to prompt a generative LLM for producing successive, provenance-tagged "delta ontology" fragments. Each fragment is refined by a judge-LLM before being merged into the core ontology, and the process iterates table-by-table following foreign key constraints until coverage is complete.
- Europe > Germany > Baden-Württemberg > Stuttgart Region > Stuttgart (0.04)
- North America > Puerto Rico > Peñuelas > Peñuelas (0.04)
- Europe > Germany > Saxony > Leipzig (0.04)
- (8 more...)
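The RIGOR abstract describes a concrete loop: iterate table-by-table in foreign-key order, generate a provenance-tagged "delta ontology" fragment per table, refine it with a judge-LLM, and merge it into a growing core ontology. A hypothetical sketch of that control flow, with both LLM calls stubbed and the ontology reduced to a class set (a real system would emit OWL):

```python
# Sketch of the RIGOR-style iteration: topologically order tables by foreign
# keys, then generate, judge, and merge one delta fragment per table.
from collections import defaultdict

def foreign_key_order(tables, fks):
    """Order tables so that referenced (parent) tables come first.
    fks: list of (child, parent) pairs."""
    deps = defaultdict(set)
    for child, parent in fks:
        deps[child].add(parent)
    ordered, seen = [], set()
    def visit(t):
        if t in seen:
            return
        seen.add(t)
        for parent in deps[t]:
            visit(parent)
        ordered.append(t)
    for t in tables:
        visit(t)
    return ordered

def generate_delta(table, core):
    """Stub for the generative LLM: propose a fragment for one table."""
    return {"classes": {table.capitalize()}, "provenance": table}

def judge(fragment):
    """Stub for the judge-LLM: accept (or refine) the fragment."""
    return fragment

def build_ontology(tables, fks):
    core = {"classes": set(), "provenance": []}
    for table in foreign_key_order(tables, fks):
        fragment = judge(generate_delta(table, core))
        core["classes"] |= fragment["classes"]
        core["provenance"].append(fragment["provenance"])
    return core

ontology = build_ontology(["order", "customer", "product"],
                          [("order", "customer"), ("order", "product")])
```

Here `order` is processed last because it references both other tables, matching the abstract's table-by-table traversal along foreign-key constraints.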
LLMs for Explainable AI: A Comprehensive Survey
Bilal, Ahsan; Ebert, David; Lin, Beiyu
Large Language Models (LLMs) offer a promising approach to enhancing Explainable AI (XAI) by transforming complex machine learning outputs into easy-to-understand narratives, making model predictions more accessible to users and helping bridge the gap between sophisticated model behavior and human interpretability. AI models, such as state-of-the-art neural networks and deep learning models, are often seen as "black boxes" due to a lack of transparency. Because users cannot fully understand how these models reach their conclusions, they have difficulty trusting the models' decisions, which leads to less effective decision-making, reduced accountability, and unexamined potential biases. The challenge is to develop XAI models that earn users' trust and provide insight into how models generate their outputs. With the development of LLMs, we explore the possibility of using these human language-based models for model explainability. This survey provides a comprehensive overview of existing approaches to LLMs for XAI and of evaluation techniques for LLM-generated explanations, discusses the corresponding challenges and limitations, and examines real-world applications. Finally, we outline future directions, emphasizing the need for more interpretable, automated, user-centric, and multidisciplinary approaches to XAI via LLMs.
- North America > United States > Oklahoma > Cleveland County > Norman (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > New Jersey > Middlesex County > Piscataway (0.04)
- (5 more...)
- Overview (1.00)
- Research Report > Promising Solution (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
Artificial Conversations, Real Results: Fostering Language Detection with Synthetic Data
Mohammadi, Fatemeh; Romano, Tommaso; Maghool, Samira; Ceravolo, Paolo
Collecting high-quality training data is essential for fine-tuning Large Language Models (LLMs). However, acquiring such data is often costly and time-consuming, especially for non-English languages such as Italian. Recently, researchers have begun to explore the use of LLMs to generate synthetic datasets as a viable alternative. This study proposes a pipeline for generating synthetic data, together with a comprehensive approach for investigating the factors that influence its validity, by examining how model performance is affected by factors such as prompt strategy, text length, and target position in a specific task: inclusive language detection in Italian job advertisements. Our results show that, in most cases and across different metrics, models fine-tuned on synthetic data consistently outperformed other models on both real and synthetic test datasets.
- Oceania > New Zealand (0.04)
- North America > Puerto Rico > Peñuelas > Peñuelas (0.04)
- Europe > Italy > Lombardy > Milan (0.04)
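The synthetic-data abstract describes a compare-by-strategy pipeline: generate synthetic labelled examples under different prompt strategies, fine-tune a model on each set, and evaluate against a real test set. A minimal, hypothetical sketch of that loop with the LLM generation and fine-tuning stubbed out; the strategy names and toy "majority-label" learner are illustrative only:

```python
# Sketch: per-strategy synthetic generation -> fine-tune -> evaluate on real data.
from collections import Counter

def generate_synthetic(strategy, n):
    """Stub LLM call: emit n labelled job-ad snippets for one prompt strategy."""
    return [(f"[{strategy}] annuncio {i}", i % 2) for i in range(n)]

def fine_tune(examples):
    """Stub fine-tuning: 'learn' the majority label of the training data."""
    majority = Counter(label for _, label in examples).most_common(1)[0][0]
    return lambda text: majority

def evaluate(model, test_set):
    """Fraction of test items the model labels correctly."""
    return sum(model(text) == label for text, label in test_set) / len(test_set)

real_test = [("annuncio A", 1), ("annuncio B", 0), ("annuncio C", 1)]
scores = {s: evaluate(fine_tune(generate_synthetic(s, 100)), real_test)
          for s in ("zero_shot", "few_shot")}
```

Varying the generation parameters (prompt strategy, text length, target position) and re-running the same loop is what lets the study attribute performance differences to those factors.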
Your voice is your voice: Supporting Self-expression through Speech Generation and LLMs in Augmented and Alternative Communication
Xu, Yiwen; Chakraborti, Monideep; Zhang, Tianyi; Eng, Katelyn; Mohan, Aanchan; Prpa, Mirjana
In this paper, we present Speak Ease: an augmentative and alternative communication (AAC) system to support users' expressivity by integrating multimodal input, including text, voice, and contextual cues (conversational partner and emotional tone), with large language models (LLMs). Speak Ease combines automatic speech recognition (ASR), context-aware LLM-based outputs, and personalized text-to-speech technologies to enable more personalized, natural-sounding, and expressive communication. Through an exploratory feasibility study and focus group evaluation with speech and language pathologists (SLPs), we assessed Speak Ease's potential to enable expressivity in AAC. The findings highlight the priorities and needs of AAC users and the system's ability to enhance user expressivity by supporting more personalized and contextually relevant communication. This work provides insights into the use of multimodal inputs and LLM-driven features to improve AAC systems and support expressivity.
- North America > United States > New York > New York County > New York City (0.05)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (3 more...)
- Research Report > New Finding (0.67)
- Research Report > Experimental Study (0.46)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Education (1.00)
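The Speak Ease abstract names a three-stage chain: ASR transcribes the user's input, a context-aware LLM expands it using conversational-partner and emotional-tone cues, and personalized TTS voices the result. A hypothetical sketch of that data flow with all three stages stubbed; the function names and voice-profile identifier are illustrative, not the paper's API:

```python
# Sketch of an ASR -> context-aware LLM -> personalized TTS pipeline.

def asr(audio: bytes) -> str:
    """Stub ASR: pretend the audio decodes to a terse keyword utterance."""
    return "coffee please"

def expand_with_llm(utterance: str, partner: str, tone: str) -> str:
    """Stub context-aware LLM: rephrase the utterance for this partner and tone."""
    openers = {"friendly": "Hey", "formal": "Excuse me,"}
    return f"{openers[tone]} {partner}, could I have a coffee, please?"

def tts(text: str, voice_profile: str):
    """Stub personalized TTS: return a (voice, text) 'rendering' instead of audio."""
    return (voice_profile, text)

voice, text = tts(expand_with_llm(asr(b"..."), partner="Alex", tone="friendly"),
                  voice_profile="user_clone_v1")
```

The contextual cues enter only at the middle stage, which is what lets the same terse input yield different expressive phrasings per partner and tone.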